Search Results
Search for: All records
Total Resources: 4
Filter by Author / Creator:
- Zhong, Ruiqi (4)
- Steinhardt, Jacob (3)
- Chen, Yanda (1)
- Darrell, Trevor (1)
- Dunlap, Lisa (1)
- Ghosh, Dhruba (1)
- Gonzalez, Joseph E (1)
- He, He (1)
- Klein, Dan (1)
- Mckeown, Kathleen (1)
- Ri, Narutatsu (1)
- Shang, Jingbo (1)
- Wang, Xiaohan (1)
- Wang, Zihan (1)
- Yeung-Levy, Serena (1)
- Yu, Zhou (1)
- Zhang, Yuhui (1)
- Zhao, Chen (1)
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites, whose policies may differ from this site's.
- Wang, Zihan; Shang, Jingbo; Zhong, Ruiqi (ACL SIGDAT Empirical Methods in Natural Language Processing (EMNLP) 2023)
- Dunlap, Lisa; Zhang, Yuhui; Wang, Xiaohan; Zhong, Ruiqi; Darrell, Trevor; Steinhardt, Jacob; Gonzalez, Joseph E; Yeung-Levy, Serena (CVPR 2024)
- Zhong, Ruiqi; Ghosh, Dhruba; Klein, Dan; Steinhardt, Jacob (Transactions of the Association for Computational Linguistics)
  Abstract: Larger language models have higher accuracy on average, but are they better on every single instance (datapoint)? Some work suggests larger models have higher out-of-distribution robustness, while other work suggests they have lower accuracy on rare subgroups. To understand these differences, we investigate these models at the level of individual instances. However, one major challenge is that individual predictions are highly sensitive to noise in the randomness in training. We develop statistically rigorous methods to address this, and after accounting for pretraining and finetuning noise, we find that our BERT-Large is worse than BERT-Mini on at least 1-4% of instances across MNLI, SST-2, and QQP, compared to the overall accuracy improvement of 2-10%. We also find that finetuning noise increases with model size and that instance-level accuracy has momentum: improvement from BERT-Mini to BERT-Medium correlates with improvement from BERT-Medium to BERT-Large. Our findings suggest that instance-level predictions provide a rich source of information; we therefore recommend that researchers supplement model weights with model predictions.
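The abstract above describes comparing a larger and a smaller model on individual instances while averaging out finetuning randomness across multiple training runs. The following is a minimal sketch of that general idea, not the authors' actual statistical method: the correctness arrays, seed counts, and the 0.1 accuracy-gap cutoff are hypothetical stand-ins for illustration only.

```python
# Minimal sketch (not the paper's exact procedure): estimate what fraction of
# instances a larger model gets wrong that a smaller model gets right, after
# averaging over several independently finetuned runs to smooth seed noise.
import numpy as np

rng = np.random.default_rng(0)
n_instances, n_seeds = 1000, 10

# Hypothetical per-seed correctness data: entry [s, i] is 1 if finetuning
# seed s of that model answered instance i correctly.
correct_small = rng.binomial(1, 0.80, size=(n_seeds, n_instances))
correct_large = rng.binomial(1, 0.85, size=(n_seeds, n_instances))

# Seed-averaged per-instance accuracy reduces sensitivity to finetuning noise.
acc_small = correct_small.mean(axis=0)
acc_large = correct_large.mean(axis=0)

# Flag instances where the larger model is noticeably worse or better.
# The paper uses statistically rigorous tests; this threshold is a heuristic.
gap = acc_large - acc_small
decayed = (gap < -0.1).mean()
improved = (gap > 0.1).mean()

print(f"overall accuracy: small={acc_small.mean():.3f}, large={acc_large.mean():.3f}")
print(f"instances where the larger model is noticeably worse: {decayed:.1%}")
print(f"instances where the larger model is noticeably better: {improved:.1%}")
```

With synthetic data like this, the instance-level breakdown shows how an overall accuracy gain can coexist with a nontrivial fraction of instances that regress, which is the comparison the abstract describes.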